Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.
- We explore the use of a spatial mode sorter to image a nanomechanical resonator, with the goal of studying the quantum limits of active imaging and extending the toolbox for optomechanical force sensing. In our experiment, we reflect a Gaussian laser beam from a vibrating nanoribbon and pass the reflected beam through a commercial spatial mode demultiplexer (Cailabs Proteus). The intensity in each demultiplexed channel depends on the mechanical modeshapes and encodes information about their displacement amplitudes. As a concrete demonstration, we monitor the angular displacement of the ribbon's fundamental torsion mode by illuminating in the fundamental Hermite-Gauss mode (HG00) and reading out in the first-order Hermite-Gauss mode. We show that this technique permits readout of the ribbon's torsional vibration with a precision near the quantum limit. Our results highlight new opportunities at the interface of quantum imaging and quantum optomechanics.
  Free, publicly accessible full text available August 1, 2026.
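The readout mechanism described in this abstract can be made concrete with textbook paraxial optics (this sketch is ours, not drawn from the paper): a small tilt angle theta of the reflected beam multiplies the field by a linear phase, which to first order couples the fundamental mode u_0 into the first-order mode u_1, so the demultiplexed power in the u_1 channel tracks the angular displacement. Here w_0 is the beam waist and k = 2*pi/lambda.

```latex
% Small-tilt expansion of the reflected HG00 field:
e^{i k \theta x}\, u_0(x) \;\approx\; u_0(x) + i k \theta\, x\, u_0(x)
  \;=\; u_0(x) + i\,\frac{k \theta w_0}{2}\, u_1(x),
\qquad \text{using } \langle u_1 \lvert x \rvert u_0 \rangle = \frac{w_0}{2}.
```

Under these assumptions the fraction of reflected power landing in the u_1 channel is (k*theta*w_0/2)^2, a signal that vanishes at zero tilt and grows as the square of the angular displacement, which is what the demultiplexed intensity reads out.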
- Continuous-time Markov decision processes (CTMDPs) are canonical models for expressing sequential decision-making in dense-time, stochastic environments. When the stochastic evolution of the environment is available only via sampling, model-free reinforcement learning (RL) is the algorithm of choice for computing an optimal decision sequence. RL, however, requires the learning objective to be encoded as scalar reward signals. Since performing such translations manually is both tedious and error-prone, a number of techniques have been proposed to translate high-level objectives (expressed in logic or automata formalisms) into scalar rewards for discrete-time Markov decision processes. Unfortunately, no automatic translation exists for CTMDPs. We consider CTMDP environments with learning objectives expressed as omega-regular languages. Omega-regular languages generalize regular languages to infinite-horizon specifications and can express properties given in the popular linear-time logic LTL. To accommodate the dense-time nature of CTMDPs, we consider two different semantics of omega-regular objectives: 1) satisfaction semantics, where the goal of the learner is to maximize the probability of spending positive time in the good states, and 2) expectation semantics, where the goal of the learner is to optimize the long-run expected average time spent in the "good states" of the automaton. We present an approach enabling correct translation to scalar reward signals that can be readily used by off-the-shelf RL algorithms for CTMDPs. We demonstrate the effectiveness of the proposed algorithms by evaluating them on popular CTMDP benchmarks with omega-regular objectives.
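The translation this abstract refers to is, at its core, a product construction: run a monitoring automaton for the omega-regular objective in lockstep with the environment, and pay reward while the automaton sits in a good state. Below is a minimal, hypothetical Python sketch of that pattern; the class names, the env.reset()/env.step() API returning a sojourn time, and the flat time-weighted reward are our assumptions for illustration, not the paper's exact CTMDP construction.

```python
class BuchiAutomaton:
    """Deterministic automaton over environment labels."""

    def __init__(self, transitions, initial, accepting):
        self.transitions = transitions  # dict: (state, label) -> state
        self.initial = initial
        self.accepting = accepting      # set of "good" automaton states
        self.state = initial

    def step(self, label):
        # Advance on the observed label; report whether we are in a good state.
        self.state = self.transitions[(self.state, label)]
        return self.state in self.accepting


class ProductRewardWrapper:
    """Wrap an environment so off-the-shelf RL sees scalar rewards.

    The agent observes the pair (env_state, automaton_state); reward is
    paid in proportion to the time spent while the automaton is in a good
    state, so long-run average reward tracks time spent in good states.
    """

    def __init__(self, env, automaton, labeling):
        self.env = env              # assumed to expose reset() / step(action)
        self.automaton = automaton
        self.labeling = labeling    # maps env states to automaton labels

    def reset(self):
        s = self.env.reset()
        self.automaton.state = self.automaton.initial
        return (s, self.automaton.state)

    def step(self, action):
        # For a CTMDP we assume step() also returns the sojourn time dt,
        # so the reward can be weighted by time spent in the good state.
        s, dt = self.env.step(action)
        good = self.automaton.step(self.labeling(s))
        reward = dt if good else 0.0
        return (s, self.automaton.state), reward
```

Running an average-reward RL algorithm on the wrapped environment would then optimize the long-run expected average time spent in good states, in the spirit of the expectation semantics above; the satisfaction semantics requires a different accounting of accepting visits than this flat per-time reward.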